Modular Networks: Learning to Decompose Neural Computation
Louis Kirsch, Julius Kunze, David Barber
Scaling model capacity has been vital in the success of deep learning. For a typical network, necessary compute resources and training time grow dramatically with model size. Conditional computation is a promising way to increase the number of parameters with a relatively small increase in resources. We propose a training algorithm that flexibly chooses neural modules based on the data to be processed. Both the decomposition and modules are learned end-to-end. In contrast to existing approaches, training does not rely on regularization to enforce diversity in module use. We apply modular networks both to image recognition and language modeling tasks, where we achieve superior performance compared to several baselines. Introspection reveals that modules specialize in interpretable contexts.
Reviews: Modular Networks: Learning to Decompose Neural Computation
The paper is concerned with conditional computation, an interesting topic still at an early stage of research and one that warrants further investigation. The paper proposes a latent-variable approach to constructing modular networks, modeling the choice of processing modules in a layer as a discrete latent variable. A modular network is composed of L modular layers, each comprising M modules and a controller. Each module is a function (a standard layer) f_i(x; \theta_i). The controller accepts the input, chooses K of the M modules to process it, and combines their outputs into the layer's output. Modular layers can be stacked or placed anywhere inside a standard network.
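To make this structure concrete, here is a minimal PyTorch sketch of a single modular layer: M candidate modules plus a controller that scores them and executes only the top K, averaging their outputs. The class name, dimensions, top-K selection rule, and output averaging are illustrative assumptions; the paper's actual end-to-end training of the controller (the core of the proposed algorithm) is not reproduced here.

```python
import torch
import torch.nn as nn


class ModularLayer(nn.Module):
    """One modular layer: M candidate modules plus a controller (sketch)."""

    def __init__(self, in_dim, out_dim, num_modules=8, k=2):
        super().__init__()
        self.k = k
        # Each module is an ordinary parameterized function f_i(x; theta_i),
        # here a linear map for simplicity.
        self.experts = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(num_modules)
        )
        # The controller maps the input to one score per module.
        self.controller = nn.Linear(in_dim, num_modules)

    def forward(self, x):
        scores = self.controller(x)               # (batch, M)
        _, top_idx = scores.topk(self.k, dim=-1)  # (batch, K) chosen modules
        # For clarity this runs all M modules and gathers the K selected
        # outputs; a real conditional-computation implementation would
        # evaluate only the chosen modules to realize the compute savings.
        all_out = torch.stack([f(x) for f in self.experts], dim=1)  # (batch, M, out_dim)
        idx = top_idx.unsqueeze(-1).expand(-1, -1, all_out.size(-1))
        selected = all_out.gather(1, idx)         # (batch, K, out_dim)
        return selected.mean(dim=1)               # combine the K module outputs


# Modular layers can be stacked or mixed with standard layers:
net = nn.Sequential(ModularLayer(32, 64), nn.ReLU(), ModularLayer(64, 10))
logits = net(torch.randn(16, 32))  # (16, 10)
```

Note that this sketch deliberately evaluates every module and discards the unselected outputs, which is simple but forfeits the efficiency benefit; skipping unselected modules is what makes conditional computation cheap at scale.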